Navigation behavior design and representations for a people aware mobile robot system
There are millions of robots in operation around the world today, and almost all of them operate on factory floors in isolation from people. However, it is now becoming clear that robots can provide much more value by assisting people with daily tasks in human environments. Perhaps the most fundamental capability for a mobile robot is navigating from one location to another. Advances in mapping and motion planning research over the past decades have made indoor navigation a commodity for mobile robots. Yet questions remain about how robots should move around humans. This thesis advocates the use of semantic maps and spatial rules of engagement to enable non-expert users to effortlessly interact with and control a mobile robot. A core concept explored in this thesis is the Tour Scenario, where the task is to familiarize a mobile robot with a new environment after it is first shipped and unpacked in a home or office setting. During the tour, the robot follows the user and creates a semantic representation of the environment. The user labels objects, landmarks and locations by performing pointing gestures and using the robot's user interface. The spatial semantic information is meaningful to humans, as it allows giving the robot commands such as "bring me a cup from the kitchen table". While the robot is navigating towards the goal, it should not treat nearby humans as obstacles and should move in a socially acceptable manner. Three main navigation behaviors are studied in this work. The first behavior is point-to-point navigation. The navigation planner presented in this thesis borrows ideas from human-human spatial interactions, and takes into account personal spaces as well as the reactions of people who are in close proximity to the trajectory of the robot. The second navigation behavior is person following. After the description of a basic following behavior, a user study on person following for telepresence robots is presented.
Additionally, situation awareness for person following is demonstrated, where the robot facilitates tasks by predicting the intent of the user and utilizing the semantic map. The third behavior is person guidance. A tour-guide robot is presented, with a particular application for visually impaired users.
Ph.D
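The abstract describes a semantic map that binds user-provided labels (given via pointing gestures during the tour) to locations, so that a command like "bring me a cup from the kitchen table" can be resolved into a navigation goal. A minimal sketch of that idea, assuming a simple 2D metric map frame; the class name `SemanticMap` and the example labels and coordinates are illustrative, not from the thesis:

```python
# Minimal sketch: a semantic map mapping human-meaningful labels to
# metric poses. All names and coordinates here are assumptions for
# illustration, not the thesis implementation.

class SemanticMap:
    def __init__(self):
        self._landmarks = {}  # label -> (x, y) position in the map frame

    def add_landmark(self, label, position):
        """Record a labeled landmark, e.g. from a pointing gesture during the tour."""
        self._landmarks[label] = position

    def resolve(self, label):
        """Turn a label into a navigation goal, or None if the label is unknown."""
        return self._landmarks.get(label)

# During the tour, the user labels locations:
smap = SemanticMap()
smap.add_landmark("kitchen table", (4.2, 1.7))
smap.add_landmark("front door", (0.0, 0.0))

# Later, "bring me a cup from the kitchen table" resolves to a metric
# goal that the point-to-point navigation planner can consume:
goal = smap.resolve("kitchen table")
print(goal)  # (4.2, 1.7)
```

The key design point is the indirection: the user never deals in coordinates, only in labels, while the planner only ever sees metric goals.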
Belief State Planning for Autonomously Navigating Urban Intersections
Urban intersections represent a complex environment for autonomous vehicles
with many sources of uncertainty. The vehicle must plan in a stochastic
environment with potentially rapid changes in driver behavior. Providing an
efficient strategy to navigate through urban intersections is a difficult task.
This paper frames the problem of navigating unsignalized intersections as a
partially observable Markov decision process (POMDP) and solves it using a
Monte Carlo sampling method. Empirical results in simulation show that the
resulting policy outperforms a threshold-based heuristic strategy on several
relevant metrics that measure both safety and efficiency.
Comment: 6 pages, 6 figures, accepted to IV201
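The core idea above is to evaluate intersection-crossing actions under uncertainty by sampling from a belief over other drivers' behavior. A toy sketch of that Monte Carlo evaluation, assuming a single oncoming car whose speed is uncertain; the belief representation, the crossing-time model, and all numbers are illustrative assumptions, not the paper's POMDP formulation:

```python
import random

# Toy sketch: decide whether to commit to crossing an unsignalized
# intersection by Monte Carlo sampling over a belief about the oncoming
# car's speed. The gap-acceptance model and thresholds are assumptions
# made for illustration only.

def sample_belief(mean_speed, std, n=1000):
    """Belief over the oncoming car's speed (m/s), represented by samples."""
    return [random.gauss(mean_speed, std) for _ in range(n)]

def go_is_safe(other_distance, other_speed, ego_cross_time=3.0):
    """'go' succeeds if the other car needs longer than ego_cross_time to arrive."""
    if other_speed <= 0:
        return True
    return other_distance / other_speed > ego_cross_time

def evaluate_go(other_distance, belief, threshold=0.99):
    """Monte Carlo estimate of P(safe | go); commit only above a safety threshold."""
    safe = sum(go_is_safe(other_distance, s) for s in belief)
    return safe / len(belief) >= threshold

random.seed(0)
belief = sample_belief(mean_speed=10.0, std=1.0)
print(evaluate_go(25.0, belief))  # car only 25 m away: too risky to commit
print(evaluate_go(80.0, belief))  # car 80 m away: safe to cross
```

A fixed-threshold heuristic would commit based on the mean speed alone; sampling the belief instead makes the decision sensitive to how uncertain the estimate is, which is the kind of gain the paper reports over its threshold baseline.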
AR Point&Click: An Interface for Setting Robot Navigation Goals
This paper considers the problem of designating navigation goal locations for
interactive mobile robots. We propose a point-and-click interface, implemented
with an Augmented Reality (AR) headset. The cameras on the AR headset are used
to detect natural pointing gestures performed by the user. The selected goal is
visualized through the AR headset, allowing the users to adjust the goal
location if desired. We conduct a user study in which participants set
consecutive navigation goals for the robot using three different interfaces: AR
Point&Click, Person Following and Tablet (bird's-eye map view). Results show
that the proposed AR Point&Click interface improved the perceived accuracy,
efficiency and reduced mental load compared to the baseline tablet interface,
and it performed on par with the Person Following method. These results show that
AR Point&Click is a feasible interaction model for setting navigation goals.
Comment: 6 Pages, 5 Figures, 4 Table
Guided Curriculum Learning for Walking Over Complex Terrain
Reliable bipedal walking over complex terrain is a challenging problem; using
a curriculum can help learning. Curriculum learning is the idea of starting
with an achievable version of a task and increasing the difficulty as a success
criterion is met. We propose a 3-stage curriculum to train Deep Reinforcement
Learning policies for bipedal walking over various challenging terrains. In the
first stage, the agent starts on an easy terrain and the terrain difficulty is
gradually increased, while forces derived from a target policy are applied to
the robot joints and the base. In the second stage, the guiding forces are
gradually reduced to zero. Finally, in the third stage, random perturbations
with increasing magnitude are applied to the robot base, so that the robustness
of the policies is improved. In simulation experiments, we show that our
approach is effective in learning separate walking policies for five
terrain types: flat, hurdles, gaps, stairs, and steps. Moreover, we demonstrate
that in the absence of human demonstrations, a simple hand-designed walking
trajectory is a sufficient prior for learning to traverse complex terrain types. In
ablation studies, we show that removing any one of the three stages of the
curriculum degrades the learning performance.
Comment: Submitted to Australasian Conference on Robotics and Automation
(ACRA) 202
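The 3-stage schedule above can be sketched as a function of training progress, assuming three scalar knobs: terrain difficulty, guiding-force magnitude, and perturbation magnitude. The equal stage lengths and linear ramps are illustrative assumptions; the abstract does not specify the paper's exact schedule:

```python
# Sketch of the 3-stage curriculum: stage 1 ramps terrain difficulty up
# under full guiding forces, stage 2 decays the guiding forces to zero,
# stage 3 ramps up random base perturbations. Stage boundaries and
# linear ramps are assumptions for illustration.

def curriculum(progress):
    """Map training progress in [0, 1] to
    (terrain_difficulty, guide_force, perturbation), each in [0, 1]."""
    third = 1.0 / 3.0
    if progress < third:
        # Stage 1: easy -> hard terrain, guiding forces fully on.
        return (progress / third, 1.0, 0.0)
    if progress < 2 * third:
        # Stage 2: full difficulty, guiding forces decay to zero.
        return (1.0, 1.0 - (progress - third) / third, 0.0)
    # Stage 3: random perturbations grow to improve robustness.
    return (1.0, 0.0, (progress - 2 * third) / third)

print(curriculum(0.0))  # (0.0, 1.0, 0.0): easy terrain, full guidance
print(curriculum(0.5))  # mid stage 2: hard terrain, guidance half removed
print(curriculum(1.0))  # end of stage 3: hard terrain, full perturbations
```

Sequencing the three knobs this way means the policy never faces two new difficulties at once, which is what makes removing any one stage hurt learning in the reported ablations.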